A Non-Revisiting Equilibrium Optimizer Algorithm


Abstract

The equilibrium optimizer (EO) is a novel physics-based meta-heuristic optimization algorithm inspired by estimating dynamics and equilibrium states in control-volume mass-balance models. As a stochastic algorithm, EO inevitably produces duplicated solutions, which wastes valuable evaluation opportunities. In addition, an excessive number of duplicated solutions can increase the risk of getting trapped in local optima. In this paper, an EO improved with a bis-population-based non-revisiting (BNR) mechanism, namely BEO, is proposed. It aims to eliminate duplicate solutions generated in the population during iterations, thus avoiding wasted evaluations. Furthermore, when a revisited solution is detected, BNR activates its unique archive learning mechanism to assist in generating a high-quality solution using excellent genes from historical information, which not only improves the algorithm's population diversity but also helps it escape the local-optimum dilemma. Experimental findings on the IEEE CEC2017 benchmark demonstrate that the proposed BEO outperforms seven other representative techniques, including the original EO algorithm.
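The abstract does not give the BNR mechanism's details, but the idea it describes (detect a revisited solution, then rebuild it from genes of archived elite solutions) can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation; the function name `bnr_step`, the rounding-based revisit check, and the perturbation factor are all assumptions.

```python
import random

def bnr_step(candidate, visited, archive, bounds, rng=random.Random(0)):
    """Accept `candidate` if it has not been evaluated before; otherwise
    regenerate it gene-by-gene from archived elite solutions.
    Hypothetical sketch of a non-revisiting mechanism."""
    key = tuple(round(x, 6) for x in candidate)  # discretize for the revisit check
    if key not in visited:
        visited.add(key)
        return candidate
    # Revisit detected: recombine genes drawn from historical elites,
    # with a small random perturbation for diversity.
    new = []
    for d in range(len(candidate)):
        donor = rng.choice(archive)          # pick an elite to donate gene d
        lo, hi = bounds[d]
        gene = donor[d] + rng.uniform(-0.1, 0.1) * (hi - lo)
        new.append(min(max(gene, lo), hi))   # clamp to the search bounds
    visited.add(tuple(round(x, 6) for x in new))
    return new
```

In this sketch the `visited` set plays the role of the revisit detector and `archive` stands in for the paper's archive of historical information; a full implementation would also bound the archive's size and decide when elites enter it.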


Similar resources

Revisiting the Paxos Algorithm

The Paxos algorithm is an efficient and highly fault-tolerant algorithm, devised by Lamport, for reaching consensus in a distributed system. Although it appears to be practical, it seems to be not widely known or understood. This thesis contains a new presentation of the Paxos algorithm, based on a formal decomposition into several interacting components. It also contains a correctness proof and ...



Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks

Progress in deep learning is slowed by the days or weeks it takes to train large models. The natural solution of using more hardware is limited by diminishing returns, and leads to inefficient use of additional resources. In this paper, we present a large batch, stochastic optimization algorithm that is both faster than widely used algorithms for fixed amounts of computation, and also scales up...




Journal

Journal title: IEICE Transactions on Information and Systems

Year: 2023

ISSN: 0916-8532, 1745-1361

DOI: https://doi.org/10.1587/transinf.2022edp7119